Paper information

NN2Poly: a polynomial representation for deep feed-forward artificial neural networks

P. Morala Miguélez, J. Cifuentes, R.E. Lillo, I. Ucar

IEEE Transactions on Neural Networks and Learning Systems

Summary:

Interpretability of neural networks (NNs) and their underlying theoretical behavior remain an open field of study even after the great success of their practical applications, particularly with the emergence of deep learning. In this work, NN2Poly is proposed: a theoretical approach to obtain an explicit polynomial model that provides an accurate representation of an already trained fully connected feed-forward artificial NN, i.e., a multilayer perceptron (MLP). This approach extends a previous idea proposed in the literature, which was limited to single hidden layer networks, to work with arbitrarily deep MLPs in both regression and classification tasks. NN2Poly applies a Taylor expansion to the activation function at each layer and then uses several combinatorial properties to calculate the coefficients of the desired polynomials. The main computational challenges of this method are discussed, along with the way to overcome them by imposing certain constraints during the training phase. Finally, simulation experiments as well as applications to real tabular datasets are presented to demonstrate the effectiveness of the proposed method.
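The core mechanism described in the abstract can be illustrated with a toy sketch. This is not the actual NN2Poly algorithm (whose combinatorial bookkeeping for deep networks is far more involved); it only shows the basic idea for a single hidden layer: truncate the Taylor series of the activation around 0, then expand each neuron's pre-activation multinomially into coefficients of input monomials. All weights and function names below are hypothetical illustrations.

```python
import itertools
import math

# Hypothetical toy network; weights are kept small so inputs stay near
# the Taylor expansion point 0. NOT the actual NN2Poly algorithm.
W = [[0.3, -0.2], [0.1, 0.4], [-0.25, 0.15]]   # hidden weights: 3 neurons x 2 inputs
b = [0.05, -0.1, 0.0]                          # hidden biases
v = [0.7, -0.5, 0.2]                           # linear output weights
v0 = 0.1                                       # output bias

def mlp(x):
    """Forward pass of the one-hidden-layer tanh MLP."""
    return v0 + sum(vj * math.tanh(sum(wji * xi for wji, xi in zip(wj, x)) + bj)
                    for vj, wj, bj in zip(v, W, b))

# tanh(z) ~ z - z^3/3 around 0 (tanh is odd, so even-order terms vanish)
TAYLOR = {1: 1.0, 3: -1.0 / 3.0}

def poly_coeffs():
    """Expand each neuron's truncated Taylor series into monomial
    coefficients, keyed by input-exponent tuples (e1, e2)."""
    coeffs = {(0, 0): v0}
    for vj, wj, bj in zip(v, W, b):
        terms = [bj] + wj                      # z = bj + w1*x1 + w2*x2
        for k, ck in TAYLOR.items():
            # z^k expanded as a sum over all ordered choices of k factors
            for combo in itertools.product(range(3), repeat=k):
                coef = vj * ck
                expo = [0, 0]
                for idx in combo:
                    coef *= terms[idx]
                    if idx > 0:
                        expo[idx - 1] += 1
                key = tuple(expo)
                coeffs[key] = coeffs.get(key, 0.0) + coef
    return coeffs

def poly_eval(coeffs, x):
    """Evaluate the polynomial at an input point."""
    return sum(c * x[0] ** e1 * x[1] ** e2 for (e1, e2), c in coeffs.items())

coeffs = poly_coeffs()
x = [0.1, -0.2]
print(abs(mlp(x) - poly_eval(coeffs, x)))      # small: the polynomial tracks the NN near x = 0
```

The sketch hints at why the paper needs combinatorial machinery and training constraints: the multinomial expansion grows quickly with the truncation order and the number of inputs, and the Taylor approximation is only accurate while pre-activations remain near the expansion point.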


Spanish layman's summary:

NN2Poly propone convertir redes neuronales en modelos polinómicos. Este enfoque utiliza expansiones de Taylor y propiedades combinatorias para calcular los coeficientes, abordando desafíos computacionales mediante la introducción de restricciones en el entrenamiento. Su eficacia se valida con datos reales.


English layman's summary:

NN2Poly proposes converting fully connected neural networks (NNs) into polynomial models. This approach uses Taylor expansions and combinatorial properties to calculate the coefficients, addressing computational challenges by introducing training constraints. Its efficacy is validated with real data.


Keywords: Interpretability, machine learning, multilayer perceptron (MLP), multiset partitions, neural networks (NNs), polynomial representation.


JCR Impact Factor and WoS quartile: 10.2 - Q1 (2023)

DOI reference: https://doi.org/10.1109/TNNLS.2023.3330328

In press: November 2023.



Citation:
P. Morala Miguélez, J. Cifuentes, R.E. Lillo, I. Ucar, NN2Poly: a polynomial representation for deep feed-forward artificial neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2023. https://doi.org/10.1109/TNNLS.2023.3330328


Research topics:
• Data analytics
